
Nanobot Algorithms for Treatment of Diffuse Cancer

Harasha, Noble, Lynch, Nancy

arXiv.org Artificial Intelligence

Motile nanosized particles, or "nanobots", promise more effective and less toxic targeted drug delivery because of their unique scale and precision. We consider the case in which the cancer is "diffuse", dispersed such that there are multiple distinct cancer sites. We investigate the problem of a swarm of nanobots locating these sites and treating them by dropping drug payloads at the sites. To improve the success of the treatment, the drug payloads must be allocated among sites according to their "demands"; this requires extra nanobot coordination. We present a mathematical model of the behavior of the nanobot agents and of their colloidal environment, including a movement model, based on experimental findings from actual nanoparticles, in which bots noisily ascend and descend chemical gradients. We present three algorithms: the first, called KM, is the most representative of reality, with agents simply following naturally existing chemical signals that surround each cancer site. The second, KMA, adds a chemical payload that amplifies the existing natural signals. The third, KMAR, adds a further chemical payload that counteracts the other signals, instead inducing negative chemotaxis so that agents are repelled from sites that are already sufficiently treated. We present simulation results for all algorithms across different types of cancer arrangements. For KM, we show that the treatment is generally successful unless the natural chemical signals are weak, in which case the treatment progresses too slowly. For KMA, we demonstrate a significant improvement in treatment speed but a drop in eventual success, except for concentrated cancer patterns. For KMAR, our results show strong performance across all types of cancer patterns, demonstrating robustness and adaptability.
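To make the movement model concrete, here is a minimal Python sketch of one noisy gradient-following step, assuming a single site emitting a radially decaying signal; the function names and parameter values are illustrative assumptions, not the authors' implementation. Flipping the attract flag gives the KMAR-style negative chemotaxis away from an already-treated site.

    import numpy as np

    def chemotaxis_step(pos, gradient_fn, rng, step=1.0, noise=0.5, attract=True):
        """One noisy gradient-following step for a single agent.

        pos         -- current (x, y) position of the agent
        gradient_fn -- returns the local chemical gradient at a position
        noise       -- std. dev. of the random (Brownian-like) component
        attract     -- True to ascend the gradient, False to descend
        """
        g = gradient_fn(pos)
        norm = np.linalg.norm(g)
        drift = g / norm if norm > 0 else np.zeros(2)
        if not attract:  # repulsion from a sufficiently treated site
            drift = -drift
        return pos + step * drift + rng.normal(0.0, noise, size=2)

    # Example: one site at the origin with a radially decaying signal.
    rng = np.random.default_rng(0)
    site = np.zeros(2)
    grad = lambda p: (site - p) / (1.0 + np.dot(site - p, site - p))
    pos = np.array([30.0, -20.0])
    for _ in range(500):
        pos = chemotaxis_step(pos, grad, rng)  # drifts toward the site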


Modeling Feasible Locomotion of Nanobots for Cancer Detection and Treatment

Harasha, Noble, Gava, Cristina, Lynch, Nancy, Contini, Claudia, Mallmann-Trenn, Frederik

arXiv.org Artificial Intelligence

Deploying motile nanosized particles, also known as "nanobots", in the human body promises to improve selectivity in drug delivery and reduce side effects. We consider a swarm of nanobots locating a single cancerous region and treating it by releasing an onboard payload of drugs at the site. At nanoscale, the computation, communication, sensing, and locomotion capabilities of individual agents are extremely limited, noisy, and/or nonexistent. We present a general model to formally describe the individual and collective behavior of agents in a colloidal environment, such as the bloodstream, for cancer detection and treatment by nanobots. This includes a feasible and precise model of agent locomotion, inspired by actual nanoparticles that, in the presence of an external chemical gradient, move towards areas of higher concentration by means of self-propulsion. We present two variants of our general model: The first assumes an endogenous chemical gradient that is fixed over time and centered at the targeted cancer site; the second is a more speculative and dynamic variant in which agents themselves create and amplify a chemical gradient centered at the cancer site. In both settings, agents can sense the gradient and ascend it noisily, locating the cancer site more quickly than via simple Brownian motion. For the first variant of the model, we present simulation results to show the behavior of agents under our locomotion model, as well as analytical results to bound the time it takes for the agents to reach the cancer site. For the second variant, simulation results highlight the collective benefit in having agents issue their own chemical signal. While arguably more speculative in its agent capability assumptions, this variant shows a significant improvement in runtime performance over the first variant, resulting from its chemical signal amplification mechanism.
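The advantage over pure Brownian motion can be illustrated with a toy hitting-time simulation; the starting position, noise level, and target radius below are assumptions for illustration, not the paper's analytical bounds. The only difference between the two conditions is a unit drift term pointing up a gradient centered on the site.

    import numpy as np

    def hitting_time(drift_on, rng, radius=2.0, step=1.0, noise=1.0,
                     max_steps=100_000):
        """Steps until the agent first enters the target ball at the origin.

        drift_on=False is pure Brownian motion; True adds noisy gradient
        ascent toward the site, as in the locomotion model.
        """
        pos = np.array([40.0, 0.0])  # starting position (assumed)
        for t in range(max_steps):
            if np.linalg.norm(pos) < radius:
                return t
            drift = -pos / np.linalg.norm(pos) if drift_on else np.zeros(2)
            pos = pos + step * drift + rng.normal(0.0, noise, size=2)
        return max_steps  # agent never reached the site within the budget

    rng = np.random.default_rng(1)
    brownian = np.mean([hitting_time(False, rng) for _ in range(20)])
    chemo = np.mean([hitting_time(True, rng) for _ in range(20)])
    print(f"mean steps -- Brownian: {brownian:.0f}, chemotactic: {chemo:.0f}")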


Foundation Artificial Intelligence Models for Health Recognition Using Face Photographs (FAHR-Face)

Haugg, Fridolin, Lee, Grace, He, John, Nürnberg, Leonard, Bontempi, Dennis, Bitterman, Danielle S., Catalano, Paul, Prudente, Vasco, Glubokov, Dmitrii, Warrington, Andrew, Pai, Suraj, De Ruysscher, Dirk, Guthier, Christian, Kann, Benjamin H., Gladyshev, Vadim N., Aerts, Hugo JWL, Mak, Raymond H.

arXiv.org Artificial Intelligence

Background: Facial appearance offers a noninvasive window into health. We built FAHR-Face, a foundation model trained on >40 million facial images, and fine-tuned it for two distinct tasks: biological age estimation (FAHR-FaceAge) and survival risk prediction (FAHR-FaceSurvival). Methods: FAHR-FaceAge underwent a two-stage, age-balanced fine-tuning on 749,935 public images; FAHR-FaceSurvival was fine-tuned on 34,389 photos of cancer patients. Model robustness (cosmetic surgery, makeup, pose, lighting) and independence (saliency mapping) were tested extensively. Both models were clinically tested in two independent cancer patient datasets, with survival analyzed by multivariable Cox models adjusted for clinical prognostic factors. Findings: For age estimation, FAHR-FaceAge had the lowest mean absolute error of 5.1 years on public datasets, outperforming benchmark models and maintaining accuracy across the full human lifespan. In cancer patients, FAHR-FaceAge outperformed a prior facial age estimation model in survival prognostication. FAHR-FaceSurvival demonstrated robust prediction of mortality, and the highest-risk quartile had more than triple the mortality of the lowest (adjusted hazard ratio 3.22; P<0.001). These findings were validated in the independent cohort, and both models showed generalizability across age, sex, race, and cancer subgroups. The two algorithms provided distinct, complementary prognostic information; saliency mapping revealed that each model relied on distinct facial regions. The combination of FAHR-FaceAge and FAHR-FaceSurvival improved prognostic accuracy. Interpretation: A single foundation model can generate inexpensive, scalable facial biomarkers that capture both biological ageing and disease-related mortality risk. The foundation model enabled effective training using relatively small clinical datasets.
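The quartile-based survival comparison described above follows a standard pattern: fit a multivariable Cox model and read off the adjusted hazard ratio as exp(coef). Below is a minimal sketch using the lifelines package on a synthetic cohort; all column names, coefficients, and data are made-up stand-ins, not the study's data.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    # Synthetic stand-in cohort; "face_risk" mimics a model risk score.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "face_risk": rng.normal(size=n),
        "age": rng.uniform(40, 85, size=n),  # clinical covariate
    })
    hazard = np.exp(0.8 * df["face_risk"] + 0.02 * df["age"])
    df["time"] = rng.exponential(1.0 / hazard)             # survival times
    df["event"] = (rng.uniform(size=n) < 0.8).astype(int)  # 1 = death observed

    # Highest- vs lowest-risk quartile, adjusted for age.
    df["risk_q4"] = (df["face_risk"] >= df["face_risk"].quantile(0.75)).astype(int)
    low = df["face_risk"] <= df["face_risk"].quantile(0.25)
    sub = df[(df["risk_q4"] == 1) | low]

    cph = CoxPHFitter()
    cph.fit(sub[["risk_q4", "age", "time", "event"]],
            duration_col="time", event_col="event")
    print(cph.summary[["exp(coef)", "p"]])  # exp(coef) = adjusted HR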


ModalTune: Fine-Tuning Slide-Level Foundation Models with Multi-Modal Information for Multi-task Learning in Digital Pathology

Ramanathan, Vishwesh, Xu, Tony, Pati, Pushpak, Ahmed, Faruk, Goubran, Maged, Martel, Anne L.

arXiv.org Artificial Intelligence

Prediction tasks in digital pathology are challenging due to the massive size of whole-slide images (WSIs) and the weak nature of training signals. Advances in computing, data availability, and self-supervised learning (SSL) have paved the way for slide-level foundation models (SLFMs) that can improve prediction tasks in low-data regimes. However, working with these models is challenging, with issues such as catastrophic forgetting during fine-tuning and under-utilization of shared information between tasks and modalities. To overcome these two challenges, we propose ModalTune, a novel fine-tuning framework which introduces the Modal Adapter to integrate new modalities without modifying SLFM weights. Additionally, we use large language models (LLMs) to encode labels as text, capturing semantic relationships and enhancing generalization across multiple tasks and cancer types in a single training recipe. ModalTune achieves state-of-the-art (SOTA) results against both uni-modal and multi-modal models across four cancer types, jointly improving survival and cancer subtype prediction while remaining competitive in pan-cancer settings. Additionally, we show ModalTune is highly generalizable to two out-of-distribution (OOD) datasets. To our knowledge, this is the first unified fine-tuning framework for multi-modal, multi-task, and pan-cancer modeling in digital pathology.
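The core idea of adding a modality without touching the foundation model's weights can be sketched generically in PyTorch: freeze the backbone and train only a small adapter and task head. This is an assumed, simplified pattern, not ModalTune's actual Modal Adapter; all dimensions and names are illustrative.

    import torch
    import torch.nn as nn

    class ModalAdapterHead(nn.Module):
        """Fuse a frozen slide-level embedding with a new modality."""
        def __init__(self, slfm, slide_dim=768, modal_dim=128, n_classes=4):
            super().__init__()
            self.slfm = slfm
            for p in self.slfm.parameters():  # SLFM weights stay unmodified
                p.requires_grad = False
            self.adapter = nn.Sequential(     # trainable modality adapter
                nn.Linear(modal_dim, slide_dim), nn.GELU(),
                nn.Linear(slide_dim, slide_dim),
            )
            self.head = nn.Linear(slide_dim, n_classes)

        def forward(self, slide_input, modal_input):
            with torch.no_grad():              # frozen backbone forward pass
                z = self.slfm(slide_input)
            z = z + self.adapter(modal_input)  # inject new-modality signal
            return self.head(z)

    # Usage with a dummy stand-in for a pretrained foundation model:
    slfm = nn.Linear(1024, 768)
    model = ModalAdapterHead(slfm)
    logits = model(torch.randn(2, 1024), torch.randn(2, 128))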


HEALNet -- Hybrid Multi-Modal Fusion for Heterogeneous Biomedical Data

Hemker, Konstantin, Simidjievski, Nikola, Jamnik, Mateja

arXiv.org Artificial Intelligence

Technological advances in medical data collection such as high-resolution histopathology and high-throughput genomic sequencing have contributed to the rising requirement for multi-modal biomedical modelling, specifically for image, tabular, and graph data. Most multi-modal deep learning approaches use modality-specific architectures that are trained separately and cannot capture the crucial cross-modal information that motivates the integration of different data sources. This paper presents the Hybrid Early-fusion Attention Learning Network (HEALNet): a flexible multi-modal fusion architecture, which a) preserves modality-specific structural information, b) captures the cross-modal interactions and structural information in a shared latent space, c) can effectively handle missing modalities during training and inference, and d) enables intuitive model inspection by learning on the raw data input instead of opaque embeddings. We conduct multi-modal survival analysis on Whole Slide Images and Multi-omic data on four cancer cohorts of The Cancer Genome Atlas (TCGA). HEALNet achieves state-of-the-art performance, substantially improving over both uni-modal and recent multi-modal baselines, whilst being robust in scenarios with missing modalities.
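The fusion-with-missing-modalities behavior can be sketched as a shared latent array that cross-attends to whichever modalities are present; absent ones are simply skipped. This is a generic sketch under assumed shapes and names, not HEALNet's published architecture.

    import torch
    import torch.nn as nn

    class SharedLatentFusion(nn.Module):
        """Shared latent array cross-attends to each available modality."""
        def __init__(self, dims, latent_dim=256, n_latents=32, heads=4):
            super().__init__()
            self.latent = nn.Parameter(torch.randn(n_latents, latent_dim))
            self.proj = nn.ModuleList([nn.Linear(d, latent_dim) for d in dims])
            self.attn = nn.MultiheadAttention(latent_dim, heads,
                                              batch_first=True)

        def forward(self, modalities):
            # modalities: list of (batch, tokens, dim) tensors, None if missing
            b = next(m.shape[0] for m in modalities if m is not None)
            z = self.latent.unsqueeze(0).expand(b, -1, -1)
            for m, proj in zip(modalities, self.proj):
                if m is None:       # missing modality: skip its update
                    continue
                kv = proj(m)
                upd, _ = self.attn(z, kv, kv)
                z = z + upd         # residual update of the shared latent
            return z.mean(dim=1)    # pooled embedding for a task head

    # Example: omics present, histology missing for this batch.
    fuse = SharedLatentFusion(dims=[1024, 200])
    out = fuse([torch.randn(2, 50, 1024), None])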


Revolutionary AI model for cancer diagnosis

#artificialintelligence

Professor Park Sang-Hyun of the Department of Robotics and Mechatronics Engineering has led a research team in developing a weakly supervised deep learning model that can accurately locate cancer in pathological images using only image-level labels indicating whether cancer is present. Existing deep learning models required datasets annotated to specify the cancer site, whereas the model developed in this study improves efficiency by dispensing with such annotations. Conventionally, locating the cancer site involves zoning, which is time-consuming and costly, so the new AI model looks promising and is hoped to make a significant contribution to the relevant research field. Weakly supervised learning models that zone cancer sites from only coarse labels, such as 'whether cancer is present in the image or not', are under active study.
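A common way to realize this kind of weakly supervised localization is attention-based multiple-instance learning: train only on the image-level "cancer present?" label, then read the per-patch attention weights as a rough localization map. The sketch below is a generic illustration with assumed feature dimensions, not the team's published model.

    import torch
    import torch.nn as nn

    class AttentionMIL(nn.Module):
        """Attention-pooled classifier over patch embeddings."""
        def __init__(self, feat_dim=512, hidden=128):
            super().__init__()
            self.score = nn.Sequential(
                nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
            self.clf = nn.Linear(feat_dim, 1)

        def forward(self, patches):                   # (n_patches, feat_dim)
            a = torch.softmax(self.score(patches), dim=0)  # patch attention
            slide = (a * patches).sum(dim=0)          # pooled slide feature
            return self.clf(slide), a.squeeze(-1)     # logit + heatmap

    model = AttentionMIL()
    logit, heatmap = model(torch.randn(300, 512))  # 300 patch embeddings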